Rates of Convergence in the Source Coding Theorem, in Empirical Quantizer Design, and in Universal Lossy Source Coding
Authors
Tamás Linder, Gábor Lugosi, Kenneth Zeger
Abstract
Rate of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R: (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mean-square error (MSE) with respect to m training vectors, then its MSE for the true source converges in expectation and almost surely to the minimum possible MSE as O(√(log m / m)); (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k / k)); (3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)); (4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the per-letter MSE of this quantizer for the true source converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)), if one chooses k = ⌊(1/R)(1−ε) log n⌋ for any ε ∈ (0, 1).
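As a concrete illustration of result (1) (and of the design procedure used in result (4)), the following sketch designs a k-dimensional quantizer by approximately minimizing the empirical MSE over m training vectors with a Lloyd/k-means iteration, then estimates its MSE for the true source on fresh samples. This is an assumed illustration, not code from the paper: the example source (uniform on [0, 1)), the parameter values, and all function names are hypothetical, and the theorem concerns the exact empirical minimizer rather than a locally optimal Lloyd solution.

```python
# Minimal sketch (assumed, not from the paper) of empirical quantizer design:
# train a k-dimensional, rate-R quantizer on m training vectors by
# (approximately) minimizing the empirical MSE, then estimate its MSE on
# fresh data from the same bounded-support memoryless source.
import numpy as np

rng = np.random.default_rng(0)

k = 2                  # vector dimension (assumed)
R = 2.0                # transmission rate in bits per source letter (assumed)
N = 2 ** int(k * R)    # codebook size: N = 2^(kR) codewords
m = 10_000             # number of training vectors
m_test = 100_000       # fresh vectors used to estimate the true-source MSE

def draw_vectors(num):
    """Memoryless source with bounded support: i.i.d. Uniform[0, 1) letters
    blocked into k-dimensional vectors (an assumed example source)."""
    return rng.random((num, k))

def nearest(codebook, x):
    """Index of the nearest codeword (squared-error sense) for each row of x."""
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def design_by_empirical_mse(train, num_codewords, iters=50):
    """Lloyd/k-means iteration: a standard way to (locally) minimize the
    empirical MSE over the training set."""
    codebook = train[rng.choice(len(train), num_codewords, replace=False)].copy()
    for _ in range(iters):
        idx = nearest(codebook, train)
        for j in range(num_codewords):
            members = train[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def per_letter_mse(codebook, x):
    """Average squared error per source letter (block distortion divided by k)."""
    q = codebook[nearest(codebook, x)]
    return ((x - q) ** 2).sum(axis=1).mean() / k

train = draw_vectors(m)
test = draw_vectors(m_test)

codebook = design_by_empirical_mse(train, N)
print("empirical (training) MSE :", per_letter_mse(codebook, train))
print("estimated true-source MSE:", per_letter_mse(codebook, test))
# Result (1): the gap between the true-source MSE of the empirically designed
# quantizer and the best achievable MSE shrinks as O(√(log m / m)).
```

Per result (4), with n source letters one would grow the dimension as k = ⌊(1/R)(1−ε) log n⌋ and train on m = ⌊n/k⌋ blocks, which drives the per-letter MSE to the distortion-rate function D(R) at rate O(√(log log n / log n)).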
Similar articles
Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding
Abstract—Rate of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R: (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mea...
Case Study: Empirical Quantizer Design
Now that we have safely made our way through the combinatorial forests of Vapnik–Chervonenkis classes, we will look at an interesting application of the VC theory to a problem in communications engineering: empirical design of vector quantizers. Vector quantization is a technique for lossy data compression (or source coding), so we will first review, at a very brisk pace, the basics of source c...
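To make the brisk review mentioned in this excerpt concrete, here is a minimal, assumed sketch (not taken from the case study) of fixed-rate vector quantization as lossy source coding: a dimension-k, rate-R quantizer is a codebook of 2^(kR) codewords, the encoder transmits the index of the nearest codeword using kR bits per block (R bits per source letter), and the decoder looks the codeword up. All names and parameter choices below are illustrative.

```python
# Minimal sketch (assumed) of fixed-rate vector quantization as lossy coding:
# each k-dimensional block is mapped to the index of its nearest codeword,
# which costs log2(codebook size) bits; the decoder is a table lookup.
import numpy as np

def vq_encode(codebook, blocks):
    """Return one codeword index per k-dimensional block (nearest in MSE)."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(codebook, indices):
    """Reconstruct each block as the codeword chosen by the encoder."""
    return codebook[indices]

# Example: k = 2, R = 2 bits/letter, so the codebook holds 2^(kR) = 16 codewords.
rng = np.random.default_rng(1)
k, R = 2, 2
codebook = rng.random((2 ** (k * R), k))   # any fixed codebook of size 2^(kR)
blocks = rng.random((1000, k))             # source letters blocked into k-vectors
indices = vq_encode(codebook, blocks)      # kR bits per block = R bits per letter
recon = vq_decode(codebook, indices)
print("rate: %.2f bits/letter" % (np.log2(len(codebook)) / k))
print("per-letter MSE: %.4f" % (((blocks - recon) ** 2).sum(axis=1).mean() / k))
```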
Empirical quantizer design in the presence of source noise or channel noise
The problem of vector quantizer empirical design for noisy channels or for noisy sources is studied. It is shown that the average squared distortion of a vector quantizer designed optimally from observing clean independent and identically distributed (i.i.d.) training vectors converges in expectation, as the training set size grows, to the minimum possible mean-squared error obtainable for quan...
A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which ...
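The two-stage structure described in the last excerpt can be sketched as follows; this is an assumed illustration, not code from that paper, and all names, codebook sizes, and the per-block selection rule are hypothetical. Stage one transmits which quantizer in a small collection best fits the current block of data, and stage two transmits the data encoded with the identified quantizer.

```python
# Minimal sketch (assumed) of two-stage fixed-rate coding: for each block of
# data, stage 1 sends the identity of the best quantizer in a collection and
# stage 2 sends the block encoded with that quantizer.
import numpy as np

rng = np.random.default_rng(2)
k = 2                                                  # vector dimension (assumed)
codebooks = [rng.random((16, k)) for _ in range(8)]    # collection of 8 fixed-rate quantizers

def quantize(codebook, vectors):
    """Nearest-codeword indices and total squared error for a set of k-vectors."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    return idx, d2[np.arange(len(vectors)), idx].sum()

def two_stage_encode(block):
    """Stage 1: identify the codebook with the least distortion on this block
    (log2(len(codebooks)) bits). Stage 2: encode every vector in the block with
    that codebook (log2(codebook size) bits per vector)."""
    costs = [quantize(cb, block) for cb in codebooks]
    cb_id = int(np.argmin([c[1] for c in costs]))
    return cb_id, costs[cb_id][0]

def two_stage_decode(cb_id, indices):
    return codebooks[cb_id][indices]

block = rng.random((100, k))                   # one block of 100 source vectors
cb_id, indices = two_stage_encode(block)
recon = two_stage_decode(cb_id, indices)
bits = np.log2(len(codebooks)) + len(block) * np.log2(len(codebooks[cb_id]))
print("total bits for the block: %.0f" % bits)
print("per-letter MSE: %.4f" % (((block - recon) ** 2).sum() / (len(block) * k)))
```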